
Deep Learning-Based Forecasting of Boarding Patient Counts to Address ED Overcrowding

Vural, Orhun, Ozaydin, Bunyamin, Booth, James, Lindsey, Brittany F., Ahmed, Abdulaziz

arXiv.org Artificial Intelligence

This study presents a deep learning-based framework for predicting emergency department (ED) boarding counts six hours in advance using only operational and contextual data, without patient-level information. Data from ED tracking systems, inpatient census, weather, holidays, and local events were aggregated hourly and processed with comprehensive feature engineering. The mean ED boarding count was 28.7 (standard deviation = 11.2). Multiple deep learning models, including ResNetPlus, TSTPlus, and TSiTPlus, were trained and optimized using Optuna, with TSTPlus achieving the best results (mean absolute error = 4.30, mean squared error = 29.47, R² = 0.79). The framework accurately forecasted boarding counts, including during extreme periods, and demonstrated that broader input features improve predictive accuracy. This approach supports proactive hospital management and offers a practical method for mitigating ED overcrowding.
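To make the forecasting setup concrete, the sketch below shows how hourly counts can be framed as a supervised learning problem with a six-hour horizon, as the abstract describes. The 24-hour lookback window, the `make_windows` helper, and the toy data are illustrative assumptions, not the authors' actual pipeline or feature set.

```python
# Hedged sketch: turning an hourly series of ED boarding counts into a
# supervised dataset for 6-hour-ahead forecasting. Window length and the
# helper name are assumptions for illustration only.

HORIZON = 6      # predict the boarding count 6 hours ahead (from the paper)
LOOKBACK = 24    # assumed input window: the previous 24 hourly counts

def make_windows(counts, lookback=LOOKBACK, horizon=HORIZON):
    """Slide a window over hourly counts; each X row is the past `lookback`
    hours, and y is the count `horizon` hours after the window ends."""
    X, y = [], []
    for t in range(lookback, len(counts) - horizon + 1):
        X.append(counts[t - lookback:t])
        y.append(counts[t + horizon - 1])
    return X, y

hourly = list(range(40))  # toy stand-in for real hourly boarding counts
X, y = make_windows(hourly)
print(len(X), X[0][-1], y[0])  # → 11 23 29
```

Each (X, y) pair produced this way could then be fed to a time-series model such as TSTPlus; in practice the input rows would also carry the contextual features (weather, holidays, inpatient census) the study aggregates hourly.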


Drone mishap during Orlando holiday aerial show sends child to hospital

FOX News

Video shows the moment drones started falling from the sky during a drone show at Lake Eola in Orlando, Florida on Dec. 21, 2024. A child was hospitalized on Saturday after being hit by a drone that was part of an Orlando, Florida holiday drone show. According to the Orlando Fire Department, a 7-year-old boy was transported to the hospital with injuries sustained from the falling drones, FOX 35 in Orlando reported. In a video posted online by X user MosquitoCoFl, hundreds of drones used in an aerial light show appeared to be flying into position before several fell from the sky and slammed into the ground. A man could be heard saying to children nearby, "Oh no! I don't believe they're supposed to be falling."


Latent Diffusion for Language Generation

Lovelace, Justin, Kishore, Varsha, Wan, Chao, Shekhtman, Eliot, Weinberger, Kilian Q.

arXiv.org Artificial Intelligence

Diffusion models have achieved great success in modeling continuous data modalities such as images, audio, and video, but have seen limited use in discrete domains such as language. Recent attempts to adapt diffusion to language have presented diffusion as an alternative to existing pretrained language models. We view diffusion and existing language models as complementary. We demonstrate that encoder-decoder language models can be used to efficiently learn high-quality language autoencoders. We then show that continuous diffusion models can be learned in the latent space of the language autoencoder, enabling us to sample continuous latent representations that the pretrained decoder maps back to natural language. We validate the effectiveness of our approach for unconditional, class-conditional, and sequence-to-sequence language generation. Across multiple diverse datasets, our latent language diffusion models are significantly more effective than previous diffusion language models.
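The core mechanism the abstract relies on is running a standard continuous diffusion process over latent vectors rather than tokens. The sketch below implements the forward (noising) step, z_t = sqrt(ā_t)·z_0 + sqrt(1 − ā_t)·ε, on a toy latent. The linear beta schedule, the number of steps, and the latent dimensionality are generic textbook assumptions, not the paper's actual configuration.

```python
import math

# Hedged sketch of the forward diffusion process in latent space.
# The linear beta schedule below is a common default, assumed here
# for illustration; the paper's schedule may differ.

T = 1000
betas = [1e-4 + (0.02 - 1e-4) * t / (T - 1) for t in range(T)]

# abar_t is the cumulative product of (1 - beta_s) for s <= t
abars, prod = [], 1.0
for b in betas:
    prod *= 1.0 - b
    abars.append(prod)

def noise_latent(z0, t, eps):
    """Apply the closed-form forward process at timestep t:
    z_t = sqrt(abar_t) * z0 + sqrt(1 - abar_t) * eps."""
    a = abars[t]
    return [math.sqrt(a) * z + math.sqrt(1.0 - a) * e for z, e in zip(z0, eps)]

z0 = [0.5, -1.0, 2.0]   # toy latent from a (hypothetical) LM encoder
eps = [0.0, 0.0, 0.0]   # zero noise keeps the example deterministic
z_early = noise_latent(z0, 0, eps)
z_late = noise_latent(z0, T - 1, eps)

# Early timesteps leave the latent almost unchanged; late timesteps
# shrink it toward pure noise (here, toward zero since eps is zero).
print(abs(z_early[0] - 0.5) < 1e-3, abs(z_late[0]) < 0.05)  # → True True
```

In the paper's setup, a denoising network would be trained to invert this process in the autoencoder's latent space, and sampled latents would then be decoded into text by the pretrained decoder.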